Online Learning for Time Series Prediction
In this paper we address the problem of predicting a time series using the ARMA (autoregressive moving average) model, under minimal assumptions on the noise terms. Using regret minimization techniques, we develop effective online learning algorithms for the prediction problem, without assuming that the noise terms are Gaussian, identically distributed, or even independent. Furthermore, we show that our algorithm's performance asymptotically approaches the performance of the best ARMA model in hindsight.
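Since the abstract states the approach only at a high level, here is a minimal sketch of the regret-minimization idea in Python: approximate the ARMA predictor with a fixed-order AR model and update its coefficients by online gradient descent on the squared prediction loss. The window length m, the decaying step size, and the squared loss are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch: online AR prediction via online gradient descent (OGD).
# Assumptions (not from the paper): AR order m, squared loss, lr/sqrt(t) steps.
import numpy as np

def online_ar_predict(series, m=10, lr=0.1):
    """Predict series[t] from the previous m values, learning online."""
    w = np.zeros(m)                          # AR coefficients, learned on the fly
    predictions = []
    for t in range(m, len(series)):
        x = series[t - m:t][::-1]            # most recent observation first
        y_hat = w @ x                        # one-step-ahead prediction
        predictions.append(y_hat)
        err = y_hat - series[t]              # observe the true value
        w -= (lr / np.sqrt(t)) * 2 * err * x # OGD step on the squared loss
    return np.array(predictions)

# usage: a noisy sine wave
rng = np.random.default_rng(0)
ts = np.sin(np.arange(500) * 0.1) + 0.1 * rng.standard_normal(500)
preds = online_ar_predict(ts)
print("mean squared error:", np.mean((preds - ts[10:]) ** 2))
```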
Budget-Constrained Item Cold-Start Handling in Collaborative Filtering Recommenders via Optimal Design
It is well known that collaborative filtering (CF) based recommender systems
provide better modeling of users and items associated with considerable rating
history. The lack of historical ratings results in the user and the item
cold-start problems. The latter is the main focus of this work. Most of the
current literature addresses this problem by integrating content-based
recommendation techniques to model the new item. However, in many cases such content is not available, and the question is whether this problem can be mitigated using CF techniques alone. We formalize this problem as an
optimization problem: given a new item, a pool of available users, and a budget
constraint, select which users to assign with the task of rating the new item
in order to minimize the prediction error of our model. We show that the
objective function is monotone-supermodular, and propose efficient optimal-design-based algorithms that attain an approximation to its optimum. Our findings are verified by an empirical study using the Netflix dataset, where the proposed algorithms outperform several baselines for the problem at hand.
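As an illustration of the optimal-design selection step, the hedged sketch below greedily picks a budgeted set of raters. It assumes each user is summarized by a latent-factor vector (e.g. from a matrix factorization fit on the rating history) and uses an A-optimality criterion, trace((X_S^T X_S + λI)^{-1}), as a proxy for the new item's prediction error; the paper's exact objective and guarantees may differ.

```python
# Hedged sketch: greedy A-optimal design for choosing which users rate a new item.
# Assumptions: users are latent-factor vectors; A-optimality is the criterion.
import numpy as np

def greedy_optimal_design(user_factors, budget, lam=1.0):
    n, d = user_factors.shape
    chosen, A = [], lam * np.eye(d)          # regularized information matrix
    for _ in range(budget):
        best_u, best_val = None, np.inf
        for u in range(n):
            if u in chosen:
                continue
            x = user_factors[u]
            # criterion after tentatively adding user u's rating
            val = np.trace(np.linalg.inv(A + np.outer(x, x)))
            if val < best_val:
                best_u, best_val = u, val
        chosen.append(best_u)
        x = user_factors[best_u]
        A += np.outer(x, x)                  # commit the selected rater
    return chosen

# usage: 200 users with 8 latent factors, budget of 5 raters
rng = np.random.default_rng(1)
U = rng.standard_normal((200, 8))
print("selected raters:", greedy_optimal_design(U, budget=5))
```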
Online Learning for Adversaries with Memory: Price of Past Mistakes
The framework of online learning with memory naturally captures learning problems with temporal effects, and was previously studied for the experts setting. In this work we extend the notion of learning with memory to the general Online Convex Optimization (OCO) framework, and present two algorithms that attain low regret. The first algorithm applies to Lipschitz continuous loss functions, obtaining optimal regret bounds for both convex and strongly convex losses. The second algorithm attains the optimal regret bounds and applies more broadly to convex losses without requiring Lipschitz continuity, yet is more complicated to implement. We complement the theoretical results with two applications: statistical arbitrage in finance, and multi-step ahead prediction in statistics.
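To make the "loss with memory" setting concrete, the following sketch runs projected online gradient descent when the loss at round t couples the last h decisions. The quadratic tracking loss, the horizon h, and the update on the newest decision are illustrative assumptions, not a reproduction of either of the paper's two algorithms.

```python
# Hedged sketch of OCO with memory: the round-t loss depends on the last h plays.
# Assumptions: quadratic tracking loss; projected OGD on the newest decision.
import numpy as np

def oco_with_memory(targets, h=3, lr=0.05, radius=1.0):
    d = targets.shape[1]
    x = np.zeros(d)
    history = [x.copy()] * h                 # the last h decisions
    total_loss = 0.0
    for target in targets:
        history = history[1:] + [x.copy()]   # slide the memory window
        avg = np.mean(history, axis=0)       # loss couples the past h decisions
        total_loss += np.sum((avg - target) ** 2)
        grad = 2 * (avg - target) / h        # gradient w.r.t. the newest decision
        x = x - lr * grad
        norm = np.linalg.norm(x)
        if norm > radius:                    # project back onto the ball
            x *= radius / norm
    return total_loss

# usage: 1000 rounds of 4-dimensional targets
rng = np.random.default_rng(2)
T = 0.1 * rng.standard_normal((1000, 4))
print("cumulative loss:", oco_with_memory(T))
```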
Online Time Series Prediction with Missing Data
We consider the problem of time series prediction in the presence of missing data. We cast the problem as an online learning problem in which the goal of the learner is to minimize prediction error. We then devise an efficient algorithm for the problem, which is based on an autoregressive model and does not assume any structure on the missing data nor on the mechanism that generates the time series. We show that our algorithm's performance asymptotically approaches the performance of the best AR predictor in hindsight, and corroborate the theoretical results with an empirical study on synthetic and real-world data.
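One concrete way to handle missingness without assuming structure on it is self-imputation: when a value is missing, feed the model's own prediction back in as a stand-in, and update the AR coefficients online only on rounds where the true value is revealed. The sketch below illustrates this under assumed choices of AR order, step size, and squared loss; it is not a faithful reproduction of the paper's algorithm.

```python
# Hedged sketch: online AR prediction with self-imputation of missing values.
# Assumptions: AR order m, fixed step size, squared loss, first m values observed.
import numpy as np

def online_ar_missing(series, observed, m=5, lr=0.05):
    """series[t] is used only where observed[t] is True."""
    w = np.zeros(m)
    buf = list(series[:m])                   # assume the first m values are seen
    preds = []
    for t in range(m, len(series)):
        x = np.array(buf[-m:][::-1])
        y_hat = w @ x
        preds.append(y_hat)
        if observed[t]:
            err = y_hat - series[t]
            w -= lr * 2 * err * x            # OGD step on revealed rounds only
            buf.append(series[t])
        else:
            buf.append(y_hat)                # impute with our own prediction
    return np.array(preds)

# usage: ~30% of a noisy sine wave goes missing
rng = np.random.default_rng(3)
ts = np.sin(np.arange(400) * 0.2) + 0.1 * rng.standard_normal(400)
mask = rng.random(400) > 0.3
preds = online_ar_missing(ts, mask)
print("MSE on observed rounds:", np.mean(((preds - ts[5:])[mask[5:]]) ** 2))
```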